Predicting Image Differences Based on Image-Difference Features

Authors

  • Ingmar Lissner
  • Jens Preiss
  • Philipp Urban
Abstract

An accurate image-difference measure would greatly simplify the optimization of imaging systems and image processing algorithms. The prediction performance of existing methods is limited because the visual mechanisms responsible for assessing image differences are not well understood. This applies especially to the cortical processing of complex visual stimuli. We propose a flexible image-difference framework that models these mechanisms using an empirical data-mining strategy. A pair of input images is first normalized to specific viewing conditions by an image appearance model. Various image-difference features (IDFs) are then extracted from the images. These features represent assumptions about visual mechanisms that are responsible for judging image differences. Several IDFs are combined in a blending step to optimize the correlation between image-difference predictions and corresponding human assessments. We tested our method on the Tampere Image Database 2008, where it showed good correlation with subjective judgments. Comparisons with other image-difference measures were also performed.

Introduction

An image-difference measure (IDM) that accurately predicts human judgments is the Holy Grail of perception-based image processing. An IDM takes two images and parameters that specify the viewing conditions (e.g., viewing distance, illuminant, and luminance level). It returns a prediction of the perceived difference between the images under the specified viewing conditions. An accurate IDM could supersede the tedious psychophysical experiments that are required to optimize imaging systems and image processing algorithms. In the past decades, many attempts have been made to create increasingly sophisticated IDMs. Unfortunately, evaluations show that they cannot yet replace human judgments for a wide range of distortions and arbitrary images [1, 2]. How an observer perceives a distortion depends on the observer’s interpretation of the image content; for example, changing a person’s skin color is likely to cause a larger perceived difference than changing the color of a wall by the same amount. It is therefore improbable that IDMs will perfectly predict human perception before cortical visual processing is comprehensively understood. However, IDMs could provide a reasonable median prediction of human judgments for a few selected distortions, e.g., lossy compression or gamut mapping.

The Role of Image Appearance Models

Many IDMs use image appearance models such as S-CIELAB [3], Pattanaik’s multiscale model [4], or iCAM [5, 6] to transform the input images into an opponent color space defined for specific viewing conditions (e.g., 10° observer, illuminant D65, and average viewing distance). This can be seen as a normalization of the images to the given viewing conditions. Advanced models also consider various appearance phenomena to adjust pixel values to human perception. Typically, they account for spatial properties of the visual system by convolving the images with the chromatic and achromatic contrast sensitivity functions. This allows a meaningful pixelwise comparison of, e.g., halftone and continuous-tone images. For instance, S-CIELAB has been used as an IDM [7] in combination with the CIEDE2000 [8] color-difference formula. Note that image appearance models are still an active research area and have room for improvement. Ideally, they normalize an input image to specific viewing conditions and remove imperceptible content. The result is an image in an opponent color space from which color attributes (lightness, chroma, and hue) can be obtained for each pixel. This space is referred to as the working color space in the following.
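As a concrete illustration, the following is a minimal sketch of such a spatial normalization in the spirit of S-CIELAB. The function names, the opponent matrix (values in the spirit of the Poirson-Wandell transform), and the Gaussian filter widths standing in for the contrast sensitivity functions are illustrative assumptions, not the implementation used in the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Illustrative CIEXYZ -> opponent (achromatic, red-green, blue-yellow) matrix.
    XYZ_TO_OPP = np.array([
        [ 0.279,  0.720, -0.107],
        [-0.449,  0.290, -0.077],
        [ 0.086, -0.590,  0.501],
    ])

    def normalize_to_viewing_conditions(img_xyz, pixels_per_degree):
        """Rough S-CIELAB-style spatial normalization of a CIEXYZ image.

        img_xyz: float array of shape (H, W, 3) in CIEXYZ.
        pixels_per_degree: pixels per degree of visual angle
                           (depends on viewing distance and resolution).
        """
        # 1. Decorrelate into an opponent space (one achromatic, two chromatic channels).
        opp = img_xyz @ XYZ_TO_OPP.T

        # 2. Blur each channel with a contrast-sensitivity-like low-pass filter.
        #    The chromatic channels tolerate stronger blurring than the achromatic one;
        #    the sigmas (in degrees of visual angle) are illustrative values.
        sigmas_deg = (0.05, 0.20, 0.40)
        filtered = np.empty_like(opp)
        for c, sigma_deg in enumerate(sigmas_deg):
            filtered[..., c] = gaussian_filter(opp[..., c], sigma=sigma_deg * pixels_per_degree)

        # 3. Transform back to CIEXYZ; a subsequent CIELAB conversion would yield the
        #    working color space in which pixelwise differences are computed.
        return filtered @ np.linalg.inv(XYZ_TO_OPP).T

After this normalization, imperceptible high-frequency content (e.g., a fine halftone pattern) is largely removed, so a pixelwise comparison of the two normalized images becomes meaningful.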
The Role of the Color Space

It is advantageous for image-difference analysis if the working color space is highly perceptually uniform, meaning that Euclidean distances correlate well with perceived color differences. Note that a color space cannot be perfectly perceptually uniform because of geometrical issues and the effect of diminishing returns in color-difference perception [9]. In addition, color-difference data are obtained using color patches instead of complex visual stimuli. Nevertheless, image gradients and edges require a perceptually meaningful normalization, i.e., their perceptual magnitudes should be reflected by the corresponding values as closely as possible. Analyzing such image features in a highly non-uniform color space may cause an over- or underestimation of their perceptual significance.

Image-Difference Features

Many IDMs create image-difference maps showing perceived pixel deviations between two input images. For image-difference evaluation, these maps are transformed into a single characteristic value, such as the mean or the 95th percentile. However, psychophysical experiments show that the degree of difference visibility is not well correlated with the perceived overall image difference [10]. For example, global intensity changes are generally less objectionable than compression artifacts [10]. It is therefore likely that the prediction performance of IDMs that only operate on image-difference maps can be improved. Our approach uses hypotheses of perceptually significant image differences. We call these hypotheses image-difference features (IDFs). Various examples can be found in the literature [10, 11, 12]. Fig. 1 outlines the normalization and feature-extraction steps of our proposed image-difference framework. We assess the relevance of our IDFs using data that relate image distortions (e.g., noise, lossy compression) to perceived image differences. A vector of IDFs is computed for each image pair (reference image and distorted image). This allows us to determine the correlations of individual IDFs with the perceived differences of the image pairs, which are expressed as mean opinion scores (MOS).

[Figure 1: The input images in the CIEXYZ color space are normalized by an image appearance model for the specified viewing conditions; image-difference features (IDFs) are then computed based on hypotheses of perceptually significant image differences.]
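To make the feature-vector and blending step concrete, here is a toy sketch. The two placeholder IDFs (a mean and a 95th-percentile pixelwise Euclidean difference in the working color space) and the least-squares fit of blending weights against the mean opinion scores are illustrative assumptions; the paper's actual IDFs and its correlation-optimizing blending are not reproduced here.

    import numpy as np

    def idf_mean_difference(ref, dist):
        """Hypothesis: the mean pixelwise color difference drives the judgment."""
        return np.mean(np.linalg.norm(ref - dist, axis=-1))

    def idf_percentile_difference(ref, dist, q=95):
        """Hypothesis: the largest clearly visible deviations dominate the judgment."""
        return np.percentile(np.linalg.norm(ref - dist, axis=-1), q)

    def idf_vector(ref, dist):
        """Stack all hypotheses into one feature vector for an image pair."""
        return np.array([
            idf_mean_difference(ref, dist),
            idf_percentile_difference(ref, dist),
        ])

    def fit_blending_weights(feature_vectors, mos):
        """Blend IDFs: least-squares weights so the weighted sum tracks the MOS.

        feature_vectors: array of shape (n_pairs, n_features); mos: shape (n_pairs,).
        A correlation-maximizing optimizer could replace this simple fit.
        """
        X = np.column_stack([feature_vectors, np.ones(len(mos))])  # add intercept
        weights, *_ = np.linalg.lstsq(X, mos, rcond=None)
        return weights

    def predict_difference(ref, dist, weights):
        """Predicted perceived difference for a new image pair."""
        x = np.append(idf_vector(ref, dist), 1.0)
        return float(x @ weights)

Given a training set of normalized image pairs with associated MOS values, fit_blending_weights is called once; predict_difference then maps any new pair to a scalar image-difference prediction, and the correlation of these predictions with held-out MOS values measures the quality of the blended IDFs.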

Journal: 19th Color and Imaging Conference Final Program and Proceedings

Publication date: 2011